Towards Understanding Subliminal Learning: When and How Hidden Biases Transfer

Schrodi, Simon, Kempf, Elias, Barez, Fazl, Brox, Thomas

arXiv.org Artificial Intelligence

Language models can transfer hidden biases during distillation. For example, a teacher that "likes owls" can make its student "like owls" too, even when the training data consists only of lists of numbers. This surprising phenomenon is called subliminal learning. Subliminal learning can be expected under soft distillation, where the student is trained on the teacher's full next-token distribution. But the fact that this also occurs under hard distillation--where the student only sees sampled tokens--raises a deeper question: when and how does subliminal learning actually occur? We answer this question through controlled experiments and mechanistic analysis. Our results show that subliminal learning does not need (global) token entanglement or logit leakage. Instead, it comes down to a small set of divergence tokens--rare cases where teachers with different biases would predict different tokens. Masking out these tokens mostly removes the hidden bias transfer. Mechanistically, divergence tokens reveal that early layers are critical. Surprisingly, finetuning even a single such early layer is sufficient for subliminal learning. Finally, we find that subliminal learning is fragile. Even small changes, like paraphrasing prompts, are usually sufficient to suppress it.

Distillation is a core technique for compressing models or transferring knowledge, where a student model is trained to imitate a teacher (Hinton et al., 2015; Ba & Caruana, 2014). The common view is that what transfers depends on the (semantic) content of the training data (Dong et al., 2023; Guan et al., 2024; Chen et al., 2025; Li et al., 2025). In this view, if the teacher's outputs do not show a trait--such as a bias toward an animal or misaligned behavior--the student should not learn it.
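The divergence-token idea above can be sketched in a few lines: compare the tokens that two teachers with different biases would sample at each position, and mask out the positions where they disagree before hard distillation. This is an illustrative sketch, not the paper's code; the function names, token values, and the mask value are assumptions.

```python
# Illustrative sketch of masking "divergence tokens" before hard distillation.
# Token sequences and the mask value (-100, commonly ignored by losses) are
# made up for this example.

def divergence_positions(tokens_a, tokens_b):
    """Positions where two teachers' sampled tokens differ."""
    return [i for i, (a, b) in enumerate(zip(tokens_a, tokens_b)) if a != b]

def mask_divergence(tokens, positions, mask_token=-100):
    """Replace divergence positions with a mask token, excluding them from training."""
    pos = set(positions)
    return [mask_token if i in pos else t for i, t in enumerate(tokens)]

biased  = [4, 7, 7, 2, 9]   # tokens sampled from the biased teacher
neutral = [4, 7, 3, 2, 9]   # tokens a neutral teacher would have sampled
div = divergence_positions(biased, neutral)   # -> [2]
masked = mask_divergence(biased, div)         # -> [4, 7, -100, 2, 9]
```

In an actual hard-distillation setup, the masked positions would simply be excluded from the student's training loss, which is what the paper reports as mostly removing the hidden bias transfer.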


ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation

Takahashi, Tomokatsu, Yamada, Masanori, Yamanaka, Yuuki, Yamashita, Tomoya

arXiv.org Artificial Intelligence

Adversarial training is the most promising method for learning robust models against adversarial examples. A recent study has shown that knowledge distillation between the same architectures is effective in improving the performance of adversarial training. Exploiting knowledge distillation is a new approach to improve adversarial training and has attracted much attention. However, its performance is still insufficient. Therefore, we propose Adversarial Robust Distillation with Internal Representation (ARDIR) to utilize knowledge distillation even more effectively. In addition to the output of the teacher model, ARDIR uses the internal representation of the teacher model as a label for adversarial training. This enables the student model to be trained with richer, more informative labels. As a result, ARDIR can learn more robust student models. We show that ARDIR outperforms previous methods in our experiments.
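The core idea described above, using the teacher's internal representation as an extra label alongside its output, can be sketched as a weighted two-term loss. The helper names, the use of mean squared error for both terms, and the weight `alpha` are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of a distillation loss with an internal-representation term,
# in the spirit of ARDIR. All names and the value of alpha are illustrative.

def mse(x, y):
    """Mean squared error between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def distill_loss(student_out, teacher_out, student_hidden, teacher_hidden, alpha=0.5):
    output_term = mse(student_out, teacher_out)        # match the teacher's output
    hidden_term = mse(student_hidden, teacher_hidden)  # match an internal representation
    return output_term + alpha * hidden_term
```

During adversarial training, both terms would be computed on adversarially perturbed inputs, so the student is pulled toward the teacher's behavior at the output and inside the network.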


"What Is AI? Dissent, Wild Claims, and Objections" (『Aiとは 異論、暴論、オブジェクション』)

#artificialintelligence

What is AI? Pass the raw data through a grid of weights (forward propagation), compare the resulting output with the reference (teacher data), adjust the grid's weights (backpropagation), and repeat until the output matches the teacher data. Finally, apply statistical processing.
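The loop described above (forward pass, comparison with teacher data, backpropagation, repeat) can be sketched with a single linear neuron trained by gradient descent. The data, learning rate, and step count are made up for illustration.

```python
# Minimal training loop: forward pass, compare with teacher data,
# adjust the weight (backpropagation), repeat. Values are illustrative.

def train(xs, teacher_ys, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        # forward propagation: pass inputs through the "grid" (one weight here)
        preds = [w * x for x in xs]
        # compare with the teacher data: gradient of mean squared error w.r.t. w
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, teacher_ys, xs)) / len(xs)
        # backpropagation step: adjust the weight to reduce the error
        w -= lr * grad
    return w

xs, teacher_ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # teacher data follows y = 2x
w = train(xs, teacher_ys)                           # converges near 2.0
```

After training, the learned weight closely matches the rule underlying the teacher data, which is the "repeat until the teacher data is satisfied" step in miniature.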